We love stories, and the stories we love the most tend to support our cherished norms and morals. But our most popular stories also tend to have many gaping plot holes. These are acts which characters could have done instead of what they did do, to better achieve their goals. Not all such holes undermine the morals of these stories, but many do.
Logically, learning of a plot hole that undermines a story’s key morals should make us like that story less. And for a hole that most everyone actually sees, that would in fact happen. This also tends to happen when we notice plot holes in obscure unpopular stories.
But this happens much less often for widely beloved stories, such as Star Wars, if only a small fraction of fans are aware of the holes. While the popularity of the story should make it easier to tell most fans about holes, fans in fact try not to hear, and punish those who tell them. (I’ve noticed this re my sf reviews; fans are displeased to hear beloved stories don’t make sense.)
So most fans remain ignorant of holes, and even fans who know mostly remain fans. They simply forget about the holes, or tell themselves that there probably exist easy hole fixes – variations on the story that lack the holes yet support the same norms and morals. Of course such fans don’t usually actually search for such fixes, they just presume they exist.
Note how this behavior contrasts with typical reactions to real world plans. Consider when someone points out a flaw in our tentative plan for how to drive from A to B, how to get food for dinner, how to remodel the bathroom, or how to apply for a job. If the flaw seems likely to make our plan fail, we seek alternate plans, and are typically grateful to those who point out the flaw. At least if they point out flaws privately, and we haven’t made a big public commitment to plans.
Yes, we might continue with our basic plan if we had good reasons to think that modest plan variations could fix the found flaws. But we wouldn’t simply presume that such variations exist, regardless of flaws. Yet this is mostly what we do for popular story plot holes. Why the different treatment?
A plausible explanation is that we like to love the same stories as others; loving stories is a coordination game. Which is why 34% of movie budgets were spent on marketing in ’07, compared to 1% for the average product. As long as we don’t expect a plot hole to put off most fans, we don’t let it put us off either. And a plausible partial reason to coordinate to love the same stories is that we use stories to declare our allegiance to shared norms and morals. By loving the same stories, we together reaffirm our shared support for such morals, as well as other shared cultural elements.
Now, another way we show our allegiance to shared norms and morals is when we blame each other. We accuse someone of being blameworthy when their behavior fits a shared blame template. Well, unless that person is so allied to us or prestigious that blaming them would come back to hurt us.
These blame templates tend to correlate with destructive behavior that makes for a worse (local) world overall. For example, we blame murder and murder tends to be destructive. But blame templates are not exactly and precisely targeted at making better outcomes. For example, murderers are blamed even when their act makes a better world overall, and we also fail to blame those who fail to murder in such situations.
These deviations make sense if blame templates must have limited complexity, due to being socially shared. To support shared norms and morals, blame templates must be simple enough that most everyone knows what they are, and can agree on whether they match particular cases. If the reality of which behaviors are actually helpful versus destructive is more complex than that, well then good behavior in some detailed “hole” cases must be sacrificed, to allow functioning norms/morals.
These deviations between what blame templates actually target, and what they should target to make a better (local) world, can be seen as “blame holes”. Just as a plot may seem to make sense on a quick first pass, with thought and attention required to notice its holes, blame holes are typically not noticed by most who only work hard enough to try to see if a particular behavior fits a blame template. While many are capable of understanding an explanation of where such holes lie, they are not eager to hear about them, and they still usually apply hole-plagued blame templates even when they see their holes. Just like they don’t like to hear about plot holes in their favorite stories, and don’t let such holes keep them from loving those stories.
For example, a year ago I ran a Twitter poll asking about the chances that the world would have been better off overall had Nazis won WWII. 44% said that chance was over 10% (the highest category offered). My point was that history is too uncertain to be very sure of the long term aggregate consequences of such big events, even when we are relatively sure about which acts tend to promote good.
Many then said I was evil, apparently seeing me as fitting the blame template of “says something positive about Nazis, or enables/encourages others to do so.” I soon after asked a poll that found only 20% guessing it was more likely than not that the author of such a poll actually wishes Nazis had won WWII. But the other 80% might still feel justified in loudly blaming me, if they saw my behavior as fitting a widely accepted blame template. I could be blamed regardless of the factual truth of what I said or intended.
Recently many called Richard Dawkins evil for apparently fitting the template “says something positive about eugenics” when he said that eugenics on humans would “work in practice” because “it works for cows, horses, pigs, dogs & roses”. To many, he was blameworthy regardless of the factual nature or truth of his statement. Yes, we might do better to instead use the blame template “endorses eugenics”, but perhaps too few are capable in practice of distinguishing “endorses” from “says something positive about”. At least maybe most can’t reliably do that in their usual gossip mode of quickly reading and judging something someone said.
On reflection, I think a great deal of our inefficient behavior and policies can be explained via limited-complexity blame templates. For example, consider the template:
Blame X if X interacts with Y on dimension D, Y suffers on D, no one should suffer on D, and X “could have” interacted so as to reduce that suffering more.
So, blame X who hires Y for a low wage, risky, or unpleasant job. Blame X who rents a high price or peeling paint room to Y. Blame food cart X that sells unsavory or unsafe food to Y. Blame nation X that lets in immigrant Y who stays poor afterward. Blame emergency room X who failed to help arriving penniless sick Y. Blame drug dealer X who sells drugs to poor, sick, or addicted Y. Blame client X who buys sex, an organ, or a child from Y who would not sell it if they were much richer.
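To make the template’s limited complexity concrete, here is a minimal sketch (in Python, with hypothetical function names and toy welfare numbers that are not from the post) contrasting the coarse shared rule with the harder question it skips, namely whether Y is actually left worse off; cases where the two checks disagree are the blame holes discussed above:

```python
# A minimal sketch (hypothetical names, toy numbers) of the blame template above,
# contrasted with the harder question it ignores. Cases where the two functions
# disagree are the "blame holes".

def fits_blame_template(x_interacted_with_y: bool,
                        y_suffers_on_d: bool,
                        x_could_have_reduced_suffering: bool) -> bool:
    """The simple, socially shared rule: blame X when all three conditions hold."""
    return (x_interacted_with_y
            and y_suffers_on_d
            and x_could_have_reduced_suffering)

def actually_makes_y_worse_off(y_welfare_with_x: float,
                               y_welfare_without_x: float) -> bool:
    """The more complex question the template skips: is Y worse off overall?"""
    return y_welfare_with_x < y_welfare_without_x

# Example: an employer offering Y a low-wage job. Y suffers on the wage dimension,
# and X "could have" paid more, so the template fires and X gets blamed...
print(fits_blame_template(True, True, True))                    # True
# ...even though taking the job leaves Y better off than having no job at all.
print(actually_makes_y_worse_off(y_welfare_with_x=30.0,
                                 y_welfare_without_x=10.0))     # False
```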
So a simple blame template can help explain laws on min wages, max rents, job & room quality regs, food quality rules, hospital care rules, and laws prohibiting drugs, organ sales, and prostitution. Yes, by learning simple economics many are capable of seeing that these rules can actually make targets Y worse off, via limiting their options. But if they don’t expect others to see this, they still tend to apply the usual blame templates. Because blame templates are socially shared, and we each tend to be punished for deviating from them, either by violating them or by failing to disapprove of violators.
In another post soon I hope to say more about the role of, and limits on, simplified blame templates. For this post, I’m content to just note their central causal roles.
Added 8am: Another key blame template appears in hierarchical organizations. When something bad seems to happen to a division, the current leader takes all the blame, even if they only recently replaced a prior leader. Rising stars gain by pushing short term gains that create long term losses, and by being promoted fast enough to escape blame for those losses.
Re my deliberate exposure proposal, many endorse a norm that those who propose policies intended to combine good and bad effects should immediately suffer the worst of those bad effects personally, even if the proposal is never implemented. Poll majorities, however, don’t support such norms.
If you make the rule "will dismiss any arguments that are even vaguely for eugenics", your time and attention don't get attacked by bad actors/ideologues trying to creep up on eugenicist policies via rationalizations. Any loss of value from productive disagreement might be counterbalanced by the gain in personal resources.
To bring this closer to home: suppose a group of people at your university have discovered a compound called KowPis that will make people have powerful spiritual experiences but also cause a really bad toxic reaction in a lot of people. They are trying to lobby for it to be added to the food served in the cafeteria you eat at every day.
You really don't want this to happen because you don't care about seeing God or whatever it is they claim, and the downsides seem pretty bad. However, you see KowPis junkies publishing poorly researched essays arguing that there are no downsides. You see such essays being circulated around. You're not great at figuring out which ones are right or wrong because you're not in the field and it's absurdly complicated. It scares the hell out of you that these people might convince everyone that there's nothing wrong with the compound and put it in your food. Your best friend Bryan has become a KowPis evangelizer, coming up with rationalizations clever enough to fool the average man, like he did for blackmail. You know deep down it stems from his irrational latent theism, but none of you bring it up because it's considered ad hom.
What do you do? It's eating up all your time to pore over paper after paper on KowPis's effects on health in a subject you have no interest in or talent for. You have to do it anyway because you know only rational argument will save you and many others like you. It seems incredibly unfair that someone can threaten your livelihood if you don't spend time researching their subject. It seems incredibly unfair that you might find perfect refutations, do a checkmate, but still have no one listen to you (as has happened many times before), and have your frickin' food poisoned.
Or... you can choose to see these misinformation-mongers for what they are, or rather what most of them are, and commit to responding as you would to violence. You can value your time and attention and personal research goals. You don't reason with someone committed to fighting you. If a crowd of such people tries to attack, you're not going to stop to reason with some of them just because they seem more rational than the rest. It seems fair to blanket-ban all of them.